86 research outputs found

    Does money matter in inflation forecasting?

    This paper provides the most comprehensive evidence to date on whether monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite-memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models, which are then compared with forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.

    Keywords: Forecasting; Inflation (Finance); Monetary theory
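    As a rough illustration only (not the paper's implementation, which uses the recursive, online form of kernel least squares), a batch kernel ridge regression on lagged inputs captures the flavour of a kernel-based non-linear autoregressive forecast, set against the naive random walk benchmark; the function names and hyperparameters below are assumptions:

    ```python
    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        # Gaussian (RBF) kernel matrix between the rows of X and Y
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)

    def kernel_ridge_forecast(y, lags=4, lam=1e-2, gamma=1.0):
        """One-step-ahead forecast from the last `lags` observations,
        fitted by batch kernel ridge regression on lagged inputs."""
        y = np.asarray(y, dtype=float)
        # Design matrix: row t is (y[t], ..., y[t+lags-1]); target is y[t+lags]
        X = np.array([y[t:t + lags] for t in range(len(y) - lags)])
        targets = y[lags:]
        K = rbf_kernel(X, X, gamma)
        alpha = np.linalg.solve(K + lam * np.eye(len(K)), targets)
        x_new = y[-lags:][None, :]
        return float(rbf_kernel(x_new, X, gamma) @ alpha)

    def random_walk_forecast(y):
        # Naive benchmark: the next value equals the last observation
        return float(np.asarray(y)[-1])
    ```

    In a forecasting comparison of this kind, both predictors would be run over a hold-out sample and scored by out-of-sample error against the random walk.
    
    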

    Connectionist natural language parsing

    The key developments of two decades of connectionist parsing are reviewed. Connectionist parsers are assessed according to their ability to learn to represent syntactic structures from examples automatically, without being presented with symbolic grammar rules. This review also considers the extent to which connectionist parsers offer computational models of human sentence processing and provide plausible accounts of psycholinguistic data. In considering these issues, special attention is paid to the level of realism, the nature of the modularity, and the type of processing to be found in a wide range of parsers.

    Fast Learning Neural Nets with Adaptive Learning Styles

    There are many learning methods in artificial neural networks. Depending on the application, one learning or weight-update rule may be more suitable than another, but the choice is not always clear-cut, despite some fundamental constraints, such as whether the learning is supervised or unsupervised. This paper addresses the learning style selection problem by proposing an adaptive learning style. Initially, some observations concerning the nature of adaptation and learning are discussed in the context of the underlying motivations for the research, and this paves the way for the description of an example system. The approach harnesses the complementary strengths of two forms of learning, which are dynamically combined in a rapid form of adaptation that balances minimalist pattern intersection learning with Learning Vector Quantization. Both methods are unsupervised, but the balance between the two is determined by a performance feedback parameter. The result is a data-driven system that shifts between alternative solutions to pattern classification problems rapidly when performance is poor, whilst adjusting to new data slowly, and residing in the vicinity of a solution, when performance is good.

    Performance-guided Neural Network for Self-Organising Network Management

    A neural network architecture is introduced for real-time learning of input sequences using external performance feedback. Some aspects of Adaptive Resonance Theory (ART) networks [1] are applied because they are able to function in a fast, real-time adaptive active network environment where user requests and new proxylets (services) are constantly being introduced over time [2,3]. The architecture learns, self-organises and self-stabilises in response to user requests, mapping the requests according to the types of proxylets available. However, in order to make the neural networks respond to performance feedback, we introduce a modification to the original ART1 network in the form of the ‘snap-drift’ algorithm, which uses fast, convergent, minimalist learning (snap) when the overall network performance is poor, and slow learning (drift towards the user request input pattern) when the performance is good. Preliminary simulations evaluate the two-tiered architecture using a simple operating environment consisting of simulated training and test data.

    Snap-Drift: Real-time, Performance-guided Learning

    A novel approach for real-time learning and mapping of patterns using an external performance indicator is described. The learning makes use of the 'snap-drift' algorithm, based on the concept of fast, convergent, minimalist learning (snap) when the overall network performance has been poor, and slower, cautious learning (drift towards user request input patterns) when the performance has been good, in a non-stationary environment where new patterns are being introduced over time. Snap is based on adaptive resonance, and drift is based on learning vector quantization (LVQ). The two are combined in a semi-supervised system that shifts its learning style whenever it receives a change in performance feedback. The learning is capable of rapidly relearning and re-stabilising according to changes in feedback or patterns. We have used this algorithm in the design of a modular neural network system, known as performance-guided adaptive resonance theory (P-ART). Simulation results show that it discovers alternative solutions in response to a significantly changed situation, in terms of the input vectors (patterns) and/or of the environment, which may require the patterns to be treated differently over time.
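    The two learning modes described above can be sketched roughly as follows. This is a minimal illustration of the snap/drift rules for a binary input pattern, not the P-ART implementation: it omits adaptive-resonance winner selection and vigilance testing, and the function name and drift rate are assumptions.

    ```python
    import numpy as np

    def snap_drift_update(w, x, performance_good, drift_rate=0.1):
        """One weight update for a single unit in the snap-drift style.

        snap  : fast, convergent, minimalist learning -- weights snap to
                the intersection (element-wise minimum) of the current
                weights and the input; applied when performance is poor.
        drift : slow, cautious LVQ-style learning -- weights drift
                towards the input pattern; applied when performance is good.
        """
        w = np.asarray(w, dtype=float)
        x = np.asarray(x, dtype=float)
        if performance_good:
            return w + drift_rate * (x - w)   # drift towards the input
        return np.minimum(w, x)               # snap to the intersection
    ```

    Switching between the two rules on each change in the performance feedback gives the relearning behaviour described: large, fast weight changes while performance is poor, and small refinements once a workable mapping has been found.
    
    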